Recently, Vehicle-to-Everything (V2X) cooperative perception has attracted increasing attention. Infrastructure sensors play a critical role in this research field; however, how to find the optimal placement of infrastructure sensors is rarely studied. In this paper, we investigate the problem of infrastructure sensor placement and propose a pipeline that can efficiently and effectively find optimal installation positions for infrastructure sensors in a realistic simulated environment. To better simulate and evaluate LiDAR placement, we establish a Realistic LiDAR Simulation library that can simulate the unique characteristics of different popular LiDARs and produce high-fidelity LiDAR point clouds in the CARLA simulator. By simulating point cloud data under different LiDAR placements, we evaluate the perception accuracy of these placements using multiple detection models. We then analyze the correlation between point cloud distribution and perception accuracy by calculating the density and uniformity of the regions of interest. Experiments show that the placement of infrastructure LiDAR can heavily affect perception accuracy, and validate that the density and uniformity of the point cloud in the region of interest can serve as indicators of perception performance.
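The density and uniformity metrics are not spelled out in the abstract above; the sketch below shows one plausible way to score the point cloud inside a region of interest, assuming points-per-square-metre for density and the inverse coefficient of variation of per-grid-cell counts for uniformity. The function name, the rectangular ROI, and both metric definitions are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def roi_density_uniformity(points, roi_min, roi_max, cell=1.0):
    """Score a LiDAR point cloud inside a rectangular region of interest.

    points:  (N, 3) array of x, y, z coordinates in metres.
    roi_min, roi_max: (x, y) corners of the ROI.
    Returns (density, uniformity). Both metrics are illustrative:
    density is points per square metre, uniformity is 1 / (1 + CV)
    of per-cell point counts, so 1.0 means perfectly even coverage.
    """
    roi_min, roi_max = np.asarray(roi_min, float), np.asarray(roi_max, float)
    inside = np.all((points[:, :2] >= roi_min) & (points[:, :2] < roi_max), axis=1)
    pts = points[inside]

    density = len(pts) / np.prod(roi_max - roi_min)

    # Histogram the points into grid cells to measure spatial uniformity.
    bins = [np.arange(roi_min[d], roi_max[d] + cell, cell) for d in range(2)]
    counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=bins)
    cv = counts.std() / (counts.mean() + 1e-9)  # coefficient of variation
    return density, 1.0 / (1.0 + cv)
```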
The bird's-eye view (BEV) is widely adopted by most current point cloud detectors due to the applicability of well-proven 2D detection techniques. However, existing methods obtain BEV features by simply collapsing voxel or point features along the height dimension, which causes heavy loss of 3D spatial information. To alleviate the information loss, we propose a novel point cloud detection network based on a multi-level feature dimensionality reduction strategy, called MDRNet. In MDRNet, the Spatial-aware Dimensionality Reduction (SDR) is designed to dynamically focus on the valuable parts of objects during voxel-to-BEV feature transformation. Furthermore, the Multi-level Spatial Residuals (MSR) is proposed to fuse multi-level spatial information in the BEV feature maps. Extensive experiments on nuScenes show that the proposed method outperforms state-of-the-art methods. The code will be available upon publication.
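A minimal sketch of the idea behind SDR, assuming it amounts to attending over the height axis rather than flatly summing it during the voxel-to-BEV collapse; the module below is an illustrative stand-in, not MDRNet's actual design.

```python
import torch
import torch.nn as nn

class SpatialAwareReduction(nn.Module):
    """Collapse voxel features (B, C, D, H, W) to BEV features (B, C, H, W)
    with learned per-height attention, so informative slices dominate
    instead of being averaged away."""

    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, voxel_feats):
        # One logit per height slice and BEV cell: (B, 1, D, H, W).
        attn = torch.softmax(self.score(voxel_feats), dim=2)
        # Weighted sum over the height dimension D instead of a flat collapse.
        return (voxel_feats * attn).sum(dim=2)

bev = SpatialAwareReduction(64)(torch.randn(2, 64, 10, 128, 128))
print(bev.shape)  # torch.Size([2, 64, 128, 128])
```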
Accurate segmentation and motion estimation of the myocardium have always been important in clinical practice, essentially contributing to downstream diagnosis. However, existing methods cannot always guarantee the shape integrity of myocardium segmentation. In addition, motion estimation requires point correspondence of the myocardium region across different frames. In this paper, we propose a novel end-to-end deep statistical shape model to focus on myocardium segmentation with both shape integrity and boundary correspondence. Specifically, the myocardium shape is represented by a fixed number of points, whose variations are extracted by Principal Component Analysis (PCA). A deep neural network is used to predict the transformation parameters (both affine and deformation), which are then used to warp the mean point cloud into the image domain. Furthermore, a differentiable rendering layer is introduced to incorporate mask supervision into the framework for learning more accurate point clouds. In this way, the proposed method is able to consistently produce anatomically plausible segmentation masks without post-processing. In addition, the predicted point clouds guarantee boundary correspondence across sequential images, which benefits downstream tasks such as motion estimation of the myocardium. We conduct several experiments to demonstrate the effectiveness of the proposed method on several benchmark datasets.
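The shape-model step can be made concrete in a few lines: a contour is rebuilt from PCA coefficients and warped into the image by the predicted affine transform. All names and array shapes below are hypothetical; the paper's exact parameterization may differ.

```python
import numpy as np

def reconstruct_shape(mean_pts, pca_modes, coeffs, affine):
    """Rebuild a myocardium contour from statistical shape parameters.

    mean_pts:  (K, 2) mean point cloud of the training shapes.
    pca_modes: (M, K, 2) principal variation modes from PCA.
    coeffs:    (M,) deformation parameters predicted by the network.
    affine:    (2, 3) affine matrix (rotation/scale/translation).
    """
    shape = mean_pts + np.tensordot(coeffs, pca_modes, axes=1)  # deform
    homog = np.concatenate([shape, np.ones((len(shape), 1))], axis=1)
    return homog @ affine.T  # map the points into the image domain
```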
Supervised multi-view stereo (MVS) methods have achieved remarkable progress in reconstruction quality, but suffer from the challenge of collecting large-scale ground-truth depth. In this paper, we propose a novel self-supervised training pipeline for MVS based on knowledge distillation, termed \textit{KD-MVS}, which mainly consists of self-supervised teacher training and distillation-based student training. Specifically, the teacher model is trained in a self-supervised fashion using both photometric and featuremetric consistency. Then we distill the knowledge of the teacher model to the student model through probabilistic knowledge transfer. With the supervision of the validated knowledge, the student model is able to outperform its teacher by a large margin. Extensive experiments performed on multiple datasets show that our method can even outperform supervised methods.
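Probabilistic knowledge transfer can be pictured as matching the student's per-pixel depth distribution to the teacher's on pixels that passed validation. A sketch, assuming a masked KL divergence over a depth-hypothesis volume; KD-MVS's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def probabilistic_kd_loss(student_logits, teacher_probs, valid_mask):
    """Distill a teacher's depth distribution into a student model.

    student_logits: (B, D, H, W) student scores over D depth hypotheses.
    teacher_probs:  (B, D, H, W) teacher probability volume.
    valid_mask:     (B, H, W) float mask of validated pixels.
    """
    log_p = F.log_softmax(student_logits, dim=1)
    # Per-pixel KL(teacher || student), summed over the depth dimension.
    kl = F.kl_div(log_p, teacher_probs, reduction="none").sum(dim=1)
    return (kl * valid_mask).sum() / valid_mask.sum().clamp(min=1.0)
```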
Recently, Implicit Neural Representations (INRs) parameterized by neural networks have emerged as a powerful and promising tool to represent different kinds of signals due to their continuous, differentiable properties, showing superiority over classical discrete representations. However, the training of neural networks for INRs only utilizes input-output pairs, while the derivatives of the target output with respect to the input are usually ignored. In this paper, we propose a training paradigm for INRs whose target output is image pixels, to encode image derivatives in addition to image values within the neural network. Specifically, we use finite differences to approximate image derivatives. We show how this training paradigm can be leveraged to solve typical INR problems, i.e., image regression and inverse rendering, and demonstrate that it improves the data efficiency and generalization capability of INRs. The code of our method is available at \url{https://github.com/megvii-research/Sobolev_INRs}.
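The finite-difference idea is straightforward to sketch: supervise the INR on pixel values and on forward-difference derivatives of the rendered and target images. The stencil and the derivative weight below are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def finite_diff(img):
    """Forward-difference derivatives of an image batch (B, C, H, W);
    both outputs are cropped to the shared shape (B, C, H-1, W-1)."""
    dx = img[..., 1:, 1:] - img[..., 1:, :-1]   # horizontal differences
    dy = img[..., 1:, 1:] - img[..., :-1, 1:]   # vertical differences
    return dx, dy

def sobolev_loss(pred, target, deriv_weight=1.0):
    """Penalize errors in both image values and their derivatives."""
    pdx, pdy = finite_diff(pred)
    tdx, tdy = finite_diff(target)
    return (F.mse_loss(pred, target)
            + deriv_weight * (F.mse_loss(pdx, tdx) + F.mse_loss(pdy, tdy)))
```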
In the clinical procedure of angioplasty (i.e., opening a clogged coronary artery), devices such as balloons and stents need to be placed and expanded in the artery under the guidance of X-ray fluoroscopy. Due to the limitation of X-ray dose, the resulting images are often noisy. To check the correct placement of these devices, multiple motion-compensated frames are averaged to enhance the view. Device tracking is therefore a necessary procedure for this purpose. Even though angioplasty devices are designed with radiopaque markers for ease of tracking, current methods struggle to deliver satisfactory results due to the small size of the markers and the complex scenes in angioplasty. In this paper, we propose an end-to-end deep learning framework for single stent tracking, which consists of three hierarchical modules: U-Net based landmark detection, ResNet based stent proposal and feature extraction, and a graph convolutional neural network (GCN) based stent tracking that temporally aggregates both spatial information and appearance features. Experiments show that our method performs significantly better in detection than point-based tracking models. In addition, its fast inference speed satisfies clinical requirements.
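The GCN stage is described only at a high level above; one round of temporal message passing over stent proposals might look like the sketch below, with nodes as per-frame proposal features and edges linking proposals in neighbouring frames. This is entirely illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TemporalGCNLayer(nn.Module):
    """Aggregate each proposal's neighbours across frames, then update
    its feature from the concatenated self + neighbour representation."""

    def __init__(self, dim):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, node_feats, adj):
        # node_feats: (N, dim) proposal features; adj: (N, N) 0/1 matrix
        # connecting proposals in temporally adjacent frames.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neighbour_mean = adj @ node_feats / deg
        return self.update(torch.cat([node_feats, neighbour_mean], dim=1))
```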
Local image feature matching, which aims to identify and correspond similar regions from image pairs, is an important concept in computer vision. Most existing image matching methods follow a one-to-one assignment principle and employ mutual nearest neighbors to guarantee unique correspondences between local features across images. However, images from different conditions may exhibit large scale variations or viewpoint diversity, so that one-to-one assignment may cause ambiguous or missing representations in dense matching. In this paper, we introduce AdaMatcher, a novel detector-free local feature matching method, which first correlates dense features via a lightweight feature interaction module and estimates the co-visible area of the paired images, then performs patch-level many-to-one assignment to predict match proposals, and finally refines them with a one-to-one refinement module. Extensive experiments show that AdaMatcher outperforms solid baselines and achieves state-of-the-art results on many downstream tasks. Additionally, the many-to-one assignment and one-to-one refinement modules can be used as a refinement network for other matching methods, such as SuperGlue, to further boost their performance. The code will be available upon publication.
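A toy, rule-based rendering of the two-stage assignment: stage one lets several patches in one image propose the same patch in the other (robust to scale change), stage two keeps only the mutually best pairs. AdaMatcher's actual modules are learned, and its one-to-one refinement regresses sub-patch positions rather than pruning.

```python
import torch

def many_to_one_then_refine(sim):
    """sim: (M, N) patch similarity matrix between two images.
    Returns (P, 2) indices of retained (row, col) matches."""
    rows = torch.arange(sim.size(0))
    # Stage 1 -- many-to-one proposals: every row picks its best column,
    # and several rows may pick the same column.
    cols = sim.argmax(dim=1)                      # (M,)
    # Stage 2 -- refinement: keep pairs whose row is also the column's best.
    best_row_per_col = sim.argmax(dim=0)          # (N,)
    keep = best_row_per_col[cols] == rows
    return torch.stack([rows[keep], cols[keep]], dim=1)
```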
In this paper, we present TransMVSNet, based on our exploration of feature matching in multi-view stereo (MVS). We analogize MVS back to its nature as a feature matching task, and therefore propose a powerful Feature Matching Transformer (FMT) that leverages intra- (self-) and inter- (cross-) attention to aggregate long-range context information within and across images. To facilitate better adaptation of the FMT, we leverage an Adaptive Receptive Field (ARF) module to ensure a smooth transition in the scope of features, and use a feature pathway to bridge different stages by passing transformed features and gradients across different scales. In addition, we apply pair-wise feature correlation to measure the similarity between features, and adopt an ambiguity-reducing focal loss to strengthen the supervision. To the best of our knowledge, TransMVSNet is the first attempt to leverage the Transformer for the task of MVS. As a result, our method achieves state-of-the-art performance on the DTU dataset, the Tanks and Temples benchmark, and the BlendedMVS dataset. The code of our method will be available at https://github.com/MegviiRobot/TransMVSNet.
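The core of the FMT, interleaved intra-image (self) and inter-image (cross) attention, can be sketched with standard attention layers. Layer counts, normalization, positional encodings, and the exact dataflow between reference and source views are simplified here.

```python
import torch
import torch.nn as nn

class FeatureMatchingBlock(nn.Module):
    """One self-/cross-attention step between a reference view and a
    source view over flattened (B, L, dim) feature maps."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, ref, src):
        ref = ref + self.self_attn(ref, ref, ref)[0]   # intra-image context
        src = src + self.self_attn(src, src, src)[0]
        src = src + self.cross_attn(src, ref, ref)[0]  # inter-image context
        return ref, src
```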
Transformers do not scale very well to long sequence lengths, largely because of quadratic self-attention complexity. In recent months, a wide spectrum of efficient, fast Transformers have been proposed to tackle this problem, more often than not claiming superior or comparable model quality to vanilla Transformer models. To date, there is no well-established consensus on how to evaluate this class of models. Moreover, inconsistent benchmarking on a wide spectrum of tasks and datasets makes it difficult to assess relative model quality amongst many models. This paper proposes a systematic and unified benchmark, Long-Range Arena, specifically focused on evaluating model quality under long-context scenarios. Our benchmark is a suite of tasks consisting of sequences ranging from 1K to 16K tokens, encompassing a wide range of data types and modalities such as text, natural and synthetic images, and mathematical expressions requiring similarity, structural, and visual-spatial reasoning. We systematically evaluate ten well-established long-range Transformer models (Reformers, Linformers, Linear Transformers, Sinkhorn Transformers, Performers, Synthesizers, Sparse Transformers, and Longformers) on our newly proposed benchmark suite. Long-Range Arena paves the way towards better understanding this class of efficient Transformer models, facilitates more research in this direction, and presents new challenging tasks to tackle. Our benchmark code will be released at https://github.com/google-research/long-range-arena.
Optimization in multi-task learning (MTL) is more challenging than single-task learning (STL), as the gradient from different tasks can be contradictory. When tasks are related, it can be beneficial to share some parameters among them (cooperation). However, some tasks require additional parameters with expertise in a specific type of data or discrimination (specialization). To address the MTL challenge, we propose Mod-Squad, a new model that is Modularized into groups of experts (a 'Squad'). This structure allows us to formalize cooperation and specialization as the process of matching experts and tasks. We optimize this matching process during the training of a single model. Specifically, we incorporate mixture of experts (MoE) layers into a transformer model, with a new loss that incorporates the mutual dependence between tasks and experts. As a result, only a small set of experts are activated for each task. This prevents the sharing of the entire backbone model between all tasks, which strengthens the model, especially when the training set size and the number of tasks scale up. More interestingly, for each task, we can extract the small set of experts as a standalone model that maintains the same performance as the large model. Extensive experiments on the Taskonomy dataset with 13 vision tasks and the PASCAL-Context dataset with 5 vision tasks show the superiority of our approach.
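The "mutual dependence between tasks and experts" suggests a mutual-information term over the routing distribution, to be maximized so that each task concentrates on a small expert subset while all experts stay in use. The estimator below is one plausible reading; Mod-Squad's exact formulation may differ.

```python
import torch

def task_expert_mi(routing_probs):
    """routing_probs: (T, E) average routing probability of each of T
    tasks over E experts. Returns I(task; expert) in nats."""
    p_te = routing_probs / routing_probs.sum()   # joint p(t, e)
    p_t = p_te.sum(dim=1, keepdim=True)          # marginal over experts
    p_e = p_te.sum(dim=0, keepdim=True)          # marginal over tasks
    ratio = p_te / (p_t * p_e + 1e-12)
    return (p_te * torch.log(ratio + 1e-12)).sum()
```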